10 research outputs found

    Building kilobots and revising kilobot design for improving the optical response

    Get PDF
    Inspired by the emergent behavior of swarms, we eventually want to use a distributed, self-organizing swarm of robots for shape formation. To verify the idea with real robots in experiments, we first need to build more Kilobots to expand our collection. The Kilobot is a small robot, 33 mm in diameter, originally designed at Harvard in 2012 and redesigned at WCU with a simplified building process in 2016 (WCU Kilobot version 1.1). Building on that design, we have redesigned the Kilobot further through three revisions (versions 1.2, 1.3, and 1.4). This work describes the challenges and solutions in building and debugging Kilobots, as well as the planned shape formation operation. Kilobots are built in-house using reflow soldering for surface-mount components and hand soldering for through-hole components. A systematic debugging procedure, along with the most commonly seen issues and their solutions, is described based on our building and testing experience. The WCU Kilobot version 1.1 was designed in PADS, but we no longer held a PADS license. We therefore redid the schematics and PCB layout in Altium Designer and enlarged the spacing between the crowded components, producing WCU Kilobot version 1.2. Although version 1.2 is nearly identical to version 1.1 apart from the added spacing, redoing it in Altium Designer, for which we could continue to maintain a license, made our later revisions possible. In shape formation, negative phototaxis (moving away from light) is the driving force of the large-scale reductive approach, yet the original Kilobot design allows such movement only in a dark room because the ambient light sensor output saturates at a low illumination level. An experiment was conducted to examine how the sensor reading saturates at increasing lux levels for different phototransistor emitter resistances, and a new emitter resistance value was proposed and implemented in our Kilobots (version 1.3), relaxing the lighting requirements so that experiments can be run conveniently even in daylight. An earlier capstone experiment in 2018-2019 indicated that the flash memory of the ATmega328P, the microcontroller on the Kilobot, was not enough to handle the calculation when more than three Kilobots with known or calculated locations were used for multilateration-based locationing of the next robot that needed to compute its position. To address this issue, we updated the Kilobot design to replace the ATmega328P (32 KB of flash memory) with the ATmega1284, which has 128 KB of flash memory for programming (version 1.4). In addition, we investigated the feasibility of installing two ambient light sensors on opposite sides of the Kilobot and found that version 1.3 was more sensitive than version 1.1, providing distinct readings even at a distance increment of one Kilobot diameter, which means that a Kilobot with two sensors can easily tell the direction of the light. Given that the new microcontroller in version 1.4 provides more I/O channels, we further revised the design to add a second ambient light sensor, which will give us more control over the Kilobots when they perform light-based movement, such as in shape formation.
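    The multilateration step is not spelled out in the abstract; the following least-squares sketch only illustrates the kind of calculation the microcontroller must handle. The anchor coordinates and measured distances are made up, and the real firmware is C on the ATmega, not Python.

```python
# Illustrative multilateration sketch (not the Kilobot firmware, which runs in
# C on the ATmega microcontroller). Given known neighbour positions and the
# distances measured to them, estimate this robot's (x, y) by least squares.
import numpy as np

def multilaterate(anchors, distances):
    """Estimate a 2D position from three or more anchors and measured ranges."""
    anchors = np.asarray(anchors, dtype=float)
    distances = np.asarray(distances, dtype=float)
    # Subtract the first anchor's circle equation from the others to obtain a
    # linear system A @ [x, y] = b.
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (distances[0] ** 2 - distances[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical anchors (units of one 33-mm Kilobot diameter) and exact ranges.
anchors = [(0, 0), (3, 0), (0, 3), (3, 3)]
true_pos = np.array([1.2, 2.1])
dists = [float(np.hypot(*(true_pos - np.array(a)))) for a in anchors]
print(multilaterate(anchors, dists))  # approximately [1.2, 2.1]
```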

    Short-Term Rainfall Prediction Using Supervised Machine Learning

    Get PDF
    Floods and rain significantly impact the economy of many agricultural countries in the world. Early prediction of rain and floods can dramatically help prevent natural disaster damage. This paper presents a machine learning, data-driven method that can accurately predict short-term rainfall. Various machine learning classification algorithms were implemented on an Australian weather dataset to train and develop an accurate and reliable model, and diverse algorithms were compared to choose the most suitable prediction model. The performance of the models was then compared using standard performance measurement metrics. The findings show that the histogram-based gradient boosting classifier gave the highest accuracy, 91%, together with a good F1 score and area under the receiver operating characteristic (ROC) curve.
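    The abstract does not include implementation details; below is a minimal scikit-learn sketch of the best-performing classifier it names. The CSV path, the numeric-feature shortcut, and the RainTomorrow target column are assumptions about the Australian weather data, not the paper's actual pipeline.

```python
# Minimal sketch of a histogram-based gradient boosting rain/no-rain classifier.
# Dataset path and column names are placeholders, not the paper's preprocessing.
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("weatherAUS.csv")                    # hypothetical path
y = (df["RainTomorrow"] == "Yes").astype(int)         # assumed target column
X = df.select_dtypes("number")                        # numeric features only, for brevity

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = HistGradientBoostingClassifier(random_state=0)  # handles NaNs natively
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
proba = clf.predict_proba(X_test)[:, 1]
print("accuracy:", accuracy_score(y_test, pred))
print("F1:", f1_score(y_test, pred))
print("ROC AUC:", roc_auc_score(y_test, proba))
```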

    Support Directional Shifting Vector: A Direction Based Machine Learning Classifier

    Get PDF
    Machine learning models have become very popular for providing rigorous solutions to complicated real-life problems. There are three main domains: supervised, unsupervised, and reinforcement learning. Supervised learning mainly deals with regression and classification. Several types of classification algorithms exist, each built on a different underlying principle, and classification performance varies with the dataset and the algorithm selected. In this article, we focus on developing an angle-based model that performs supervised classification. Two shifting vectors, the Support Direction Vector (SDV) and the Support Origin Vector (SOV), form a linear function whose cosine angle is measured against both the target-class data and the non-target-class data. The linear function is positioned so that its angle with the target-class data is minimized and its angle with the non-target-class data is maximized. The positional error of the linear function is modelled as a loss function that is iteratively optimized using the gradient descent algorithm. To justify the acceptability of this method, we implemented the model on three different standard datasets, where it showed accuracy comparable to existing standard supervised classification algorithms. Doi: 10.28991/esj-2021-01306
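    The exact SDV/SOV formulation is given in the paper itself; the sketch below only illustrates the general idea in NumPy, using a made-up two-class dataset, a simple cosine-angle loss, and a numerical gradient in place of the paper's derivation.

```python
# Illustrative direction-based classifier sketch: learn a direction w that has
# a small angle to target-class points and a large angle to non-target points.
# The loss, optimizer settings, and synthetic data are assumptions.
import numpy as np

rng = np.random.default_rng(0)
X_pos = rng.normal(loc=[2.0, 2.0], size=(50, 2))    # target class
X_neg = rng.normal(loc=[-2.0, -2.0], size=(50, 2))  # non-target class

def cos_to(w, X):
    """Cosine of the angle between direction w and each row of X."""
    return X @ w / (np.linalg.norm(X, axis=1) * np.linalg.norm(w) + 1e-12)

def loss(w):
    # Small angle (cos -> 1) for target points, large angle (cos -> -1) otherwise.
    return np.mean(1.0 - cos_to(w, X_pos)) + np.mean(1.0 + cos_to(w, X_neg))

def num_grad(f, w, eps=1e-6):
    g = np.zeros_like(w)
    for i in range(len(w)):
        d = np.zeros_like(w)
        d[i] = eps
        g[i] = (f(w + d) - f(w - d)) / (2 * eps)
    return g

w = rng.normal(size=2)
for _ in range(500):                                 # plain gradient descent
    w -= 0.1 * num_grad(loss, w)

pred = cos_to(w, np.vstack([X_pos, X_neg])) > 0      # classify by sign of cosine
labels = np.r_[np.ones(50), np.zeros(50)].astype(bool)
print("training accuracy:", np.mean(pred == labels))
```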

    A Review on Heart Diseases Prediction Using Artificial Intelligence

    No full text
    Heart disease is one of the major concerns of the modern world, and a shortage of experts makes it an even bigger concern. Diagnosing heart disease at an early stage is possible with Artificial Intelligence (AI) techniques, which can reduce the number of experts needed. This paper first discusses different kinds of heart disease and the importance of detecting them early. Two popular diagnostic systems for collecting data, and how they work, are then highlighted. Different model architectures in the field are described: first the Support Vector Machine (SVM) machine learning algorithm, and then popular deep learning architectures for detecting heart disease, such as the Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), and Long Short-Term Memory (LSTM) network. Finally, a discussion, comparison, and directions for future work are given. This article aims to clarify the present and future state of AI in medical technology for predicting heart disease.
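    As a concrete hint of what the SVM approach described in the review looks like in practice, here is a minimal scikit-learn sketch; the heart.csv path and target column are hypothetical placeholders for a UCI-style heart-disease table, not a model from any of the reviewed papers.

```python
# Minimal sketch of an SVM heart-disease classifier on tabular features.
# The CSV path and "target" column are illustrative placeholders.
import pandas as pd
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

df = pd.read_csv("heart.csv")                 # hypothetical path
X, y = df.drop(columns="target"), df["target"]

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(model, X, y, cv=5)   # 5-fold cross-validated accuracy
print("mean accuracy:", scores.mean())
```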

    A deep learning approach for lane marking detection applying encode-decode instant segmentation network

    No full text
    Unintentional road accidents cause many deaths and disabilities and also result in the loss of significant financial assets. Researchers are incorporating several essential features of Advanced Driver Assistance Systems (ADAS) into vehicles to prevent road accidents. Lane marking detection (LMD) is a fundamental ADAS technology that helps the vehicle keep its position in the lane. Current LMD research employs Deep Learning (DL) methodologies but still faces several constraints: researchers often encounter difficulties due to environmental factors such as lighting variation, obstacles, shadows, and curved lanes. To address these limitations, this study presents the Encode-Decode Instant Segmentation Network (EDIS-Net), a DL methodology for detecting lane markings under various environmental conditions with reliable accuracy. The framework is based on the E-Net architecture and incorporates combined cross-entropy and discriminative losses. The encoding segment is split into binary and instant segmentation branches to extract information about the lane pixels and their positions. Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is employed to connect the predicted lane pixels and produce the final output. The system was trained with augmented data from the TuSimple dataset and then tested on three datasets: TuSimple, CalTech, and a local dataset. On the TuSimple dataset, the model achieved 97.39% accuracy. Furthermore, it achieved average accuracies of 97.07% and 96.23% on the CalTech and local datasets, respectively. On the testing datasets, EDIS-Net exhibited promising results compared with existing LMD approaches. Since the proposed framework performs better on the testing datasets, it can be argued that the model can recognize lane markings confidently in various scenarios. This study thus presents a novel EDIS-Net technique for efficient lane marking detection, together with verification of the model's performance on three different datasets.
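    The DBSCAN post-processing idea can be sketched roughly as follows; the synthetic lane mask, the clustering of raw pixel coordinates, and the eps/min_samples values are illustrative assumptions rather than the paper's actual configuration.

```python
# Sketch of the post-processing idea: take the pixels a segmentation branch
# marks as "lane" and group them into individual lanes with DBSCAN.
import numpy as np
from sklearn.cluster import DBSCAN

# Fake 100x200 binary mask with two roughly vertical lane markings.
mask = np.zeros((100, 200), dtype=bool)
rows = np.arange(100)
mask[rows, 50 + rows // 5] = True                 # left lane
mask[rows, 140 - rows // 5] = True                # right lane

coords = np.column_stack(np.nonzero(mask))        # (row, col) of lane pixels
labels = DBSCAN(eps=5, min_samples=5).fit_predict(coords)

for lane_id in sorted(set(labels) - {-1}):        # -1 marks noise points
    lane = coords[labels == lane_id]
    print(f"lane {lane_id}: {len(lane)} pixels, "
          f"columns {lane[:, 1].min()}..{lane[:, 1].max()}")
```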

    A Comprehensive Review on Lane Marking Detection Using Deep Neural Networks

    No full text
    Lane marking recognition is a crucial capability for automotive vehicles, as it underpins the autonomy features of Advanced Driver Assistance Systems (ADAS). Researchers have recently made promising improvements in the application of Lane Marking Detection (LMD). This article reviews lane marking detection, focusing mainly on deep learning techniques. It first introduces LMD approaches based on deep neural networks and conventional techniques. Lane marking detection frameworks can be categorized into single-stage and two-stage architectures, and for each category the paper elaborates on the network architecture and the loss functions used to improve performance. The network architectures are divided into object detection, classification, and segmentation approaches, and each is discussed along with its contributions and limitations. Simplification and optimization of the networks for streamlining the architecture are also briefly covered. Additionally, comparative performance results, with visualizations of the final outputs of five existing techniques, are presented. Finally, the review concludes by pointing out particular challenges in lane marking detection, such as generalization problems and computational complexity, and briefly outlines future directions for addressing these issues, for instance efficient neural networks, meta-learning, and unsupervised learning.

    A deep learning approach for COVID-19 and pneumonia detection from chest X-ray images

    Get PDF
    There has been a surge in biomedical imaging technologies with the recent advancement of deep learning, which is being used for diagnosis from X-ray, computed tomography (CT), electrocardiogram (ECG), and electroencephalography (EEG) images. However, most of these systems target only a single disease. In this research, a computer-aided deep learning model named COVID-CXDNetV2 is presented to detect two separate diseases, coronavirus disease 2019 (COVID-19) and pneumonia, from X-ray images in real time. The proposed model is based on You Only Look Once (YOLOv2) with a residual neural network (ResNet) and is trained on a large X-ray image dataset containing 3788 samples of three classes: COVID-19, pneumonia, and normal. The model obtained a maximum overall classification accuracy of 97.9% with a loss of 0.052 for multiclass classification (COVID-19, pneumonia, and normal), and 99.8% accuracy, 99.52% sensitivity, and 100% specificity with a loss of 0.001 for binary classification (COVID-19 and normal), which beats some current state-of-the-art results. The authors believe that this method will be applicable to diagnosis in the medical domain and will make a significant contribution in real-life settings.
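    For reference, the reported sensitivity and specificity figures are standard confusion-matrix quantities; the short sketch below shows how they are computed, using made-up label arrays rather than the model's actual predictions.

```python
# How binary accuracy, sensitivity, and specificity are derived from a
# confusion matrix; the label arrays are placeholders, not model output.
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])   # 1 = COVID-19, 0 = normal
y_pred = np.array([1, 1, 0, 0, 0, 0, 1, 1])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                  # true positive rate (recall)
specificity = tn / (tn + fp)                  # true negative rate
print("accuracy:", accuracy_score(y_true, y_pred))
print("sensitivity:", sensitivity)
print("specificity:", specificity)
```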

    Convenient Way to Detect Ulcer in Wireless Capsule Endoscopy Through Fuzzy Logic Technique

    No full text
    Ulcers are among the most common and dangerous manifestations of many serious diseases of the gastrointestinal tract. Small-intestine ulcers are complicated to diagnose and detect with alternative endoscopy methods. The Wireless Capsule Endoscopy (WCE) technique is increasingly being used as a convenient way to visualize these ulcers. However, checking the vast number of images captured by the WCE is challenging and time-consuming for clinicians, so providing an automated ulcer-detection system to assist them has become a crucial concern. In this paper, an automatic ulcer diagnosis model is introduced to detect ulcers in frames extracted from the captured WCE video. In the proposed method, consecutive steps, including pre-processing and a fuzzy logic framework, are applied to extract the ulcer region in the L*a*b colour model. Using statistical features and a KNN classifier, the proposed method achieved 95% sensitivity, 95.5% accuracy, 97% specificity, a 96.48% F1 score, 98% precision, and a 91% negative predictive value. Based on the analysis of these results and comparison studies, the method is expected to have a positive impact on this research area.
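    A rough sketch of the feature-extraction and KNN stage is given below; the fuzzy-logic segmentation step is omitted, and the channel-statistics feature set, synthetic images, and labels are assumptions for illustration only.

```python
# Convert a WCE frame to the L*a*b colour space, summarize each channel with
# simple statistics, and classify with KNN. Images and labels are synthetic.
import numpy as np
from skimage.color import rgb2lab
from sklearn.neighbors import KNeighborsClassifier

def lab_stats(rgb_image):
    """Mean and standard deviation of the L*, a*, and b* channels (6 features)."""
    lab = rgb2lab(rgb_image)
    return np.concatenate([lab.mean(axis=(0, 1)), lab.std(axis=(0, 1))])

rng = np.random.default_rng(0)
images = rng.random((40, 64, 64, 3))          # 40 fake RGB frames in [0, 1]
labels = rng.integers(0, 2, size=40)          # 1 = ulcer, 0 = normal (fake)

features = np.array([lab_stats(img) for img in images])
knn = KNeighborsClassifier(n_neighbors=5).fit(features[:30], labels[:30])
print("held-out accuracy:", knn.score(features[30:], labels[30:]))
```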

    Small intestine bleeding detection using color threshold and morphological operation in WCE images

    Get PDF
    Wireless capsule endoscopy (WCE) is a significant modern technique for observing the whole gastroenterological tract in a non-invasive manner to diagnose various diseases such as bleeding, ulcers, tumors, Crohn's disease, and polyps. However, it places a substantial burden on physicians: manually checking the vast number of image frames is time-consuming and prone to human oversight errors. These problems motivate researchers to employ computer-aided systems that classify the relevant information in the image frames. Therefore, this research proposes a computer-aided system based on color thresholding and morphological operations to recognize bleeding images in WCE. In addition, a quadratic support vector machine (QSVM) classifier is employed to classify bleeding and non-bleeding images using a statistical feature vector in HSV color space. After extensive experiments on clinical data, 95.8% accuracy, 95% sensitivity, 97% specificity, 80% precision, 99% negative predictive value, and an 85% F1 score were achieved, outperforming some of the existing methods in this regard. It is expected that this methodology will make a significant contribution to WCE technology.
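    The pipeline components named in the abstract can be sketched as follows; the HSV threshold bounds, morphological kernel size, feature choice, and synthetic frames are illustrative assumptions, and the degree-2 polynomial kernel is one common way to realize a "quadratic" SVM.

```python
# Sketch of an HSV colour threshold with morphological clean-up (OpenCV),
# followed by a quadratic SVM on simple statistical features. All parameter
# values and data below are assumptions, not the paper's configuration.
import cv2
import numpy as np
from sklearn.svm import SVC

def red_region_features(bgr_image):
    """Fraction of 'bleeding-red' pixels plus the mean HSV of the frame."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))       # reddish hues
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)       # remove specks
    return np.r_[mask.mean() / 255.0, hsv.reshape(-1, 3).mean(axis=0)]

rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(40, 64, 64, 3), dtype=np.uint8)  # fake WCE frames
labels = rng.integers(0, 2, size=40)                                 # 1 = bleeding (fake)

X = np.array([red_region_features(f) for f in frames])
qsvm = SVC(kernel="poly", degree=2)      # "quadratic" SVM via degree-2 polynomial kernel
qsvm.fit(X[:30], labels[:30])
print("held-out accuracy:", qsvm.score(X[30:], labels[30:]))
```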